FLAIR#2 Dataset and Datamodule Integration #2394
Conversation
…mg and msk) Updates in the custom raster dataset tutorial and the actual file documentation. The previous recommended approach (overriding `__getitem__`) is outdated. Refs: microsoft#2292 (reply in thread)
Co-authored-by: Adam J. Stewart <[email protected]>
… type individually
Not fully functioning yet, contains copy paste from other datasets
Additionally, some small code refactors are done
…d refine plotting
Using the entire Sentinel-2 image and a matplotlib patch to debug; otherwise it is really hard to find the correct spot due to the low resolution
…y()` for sentinel With the nested dict, it was not possible to download dynamically
md5s might change due to timestamps; this eases the process of changing the md5s
torchgeo/datasets/flair2.py
to_extract.append(dir_name)
continue

files_glob = os.path.join(downloaded_path, "**", self.globs[train_or_test])
When instantiating the datamodule, it extracts the flair_sen_train zip file on every run. I traced it here, and I believe it's because when going through dir_name="flair_sen_train", I get files_glob="/home/jhj/dataset_test/FLAIR2/flair_sen_train/**/SEN2_*{0}.npy", with the curly braces at the end. This makes the glob.glob() call in the next line fail. I tried manually setting it to files_glob="/home/jhj/dataset_test/FLAIR2/flair_sen_train/**/SEN2_*.npy", which fixed the issue.
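A minimal reproduction of that failure mode (directory layout and format argument are assumptions on my part):

import glob
import os

# The glob template still contains the "{0}" format placeholder, so it
# matches no files and the archive is re-extracted on every run.
template = os.path.join('FLAIR2', 'flair_sen_train', '**', 'SEN2_*{0}.npy')
print(glob.glob(template, recursive=True))  # [] -> triggers extraction

# Filling the placeholder first (e.g. with 'data' or 'masks') fixes it.
print(glob.glob(template.format('data'), recursive=True))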
Seems like it's the same with flair_2_sen_test, which also gets the {0} in files_glob, and extracts the zip-file on every run.
I see why the unexpected behavior appears. For clarification: the format string is necessary, as there are two files inside the directory: a mask file and a data file. To get only the corresponding mask or data file, I had to format the glob. Would you be willing to share your code snippet with me so I can debug this real quick?
I just manually removed the {0} in the files_glob string before running glob.glob():
files_glob = os.path.join(downloaded_path, "**", self.globs[train_or_test])
# Temporary hack: replace the templated glob for the train split with a
# literal pattern (no "{0}" placeholder) to test the extraction check.
if "flair_sen_train" in files_glob:
    files_glob = "/home/jhj/dataset_test/FLAIR2/flair_sen_train/**/SEN2_*.npy"
if not glob.glob(files_glob, recursive=True):
    to_extract.append(dir_name)
I.e., I hard-coded files_glob so I could check whether it stopped extracting the zip-file on every run. Not really a solution, but I just wanted to see if I was in the right place in the code.
Just tested it again, and all download and data extraction works now 👍
Apologies if the comments were a bit scattered and not really a proper review. I was just trying to run the code, and commented on the go with any issues I encountered 🙂
flair_2_centroids_sp_to_patch.zip vs. flair-2_centroids_sp_to_patch.json Refs: microsoft#2394 (comment)
Weird; I was sure I tried this. Thanks for letting me know.
Instead of loading both the sentinel data/cloud-snow masks into one sample, store them separately. Same with the crop_indices.
Yeah, it seems like a weird error. Perhaps they changed it on their side, such that the zip-file naming was consistent, although the json file inside still uses a hyphen instead of an underscore 🙂
torchgeo/datasets/flair2.py
files = [
    dict(image=image, sentinel=sentinel, mask=mask)
    for image, sentinel, mask in zip(images, sentinels, masks)
]
It seems like this part filters out most of the aerial images. I think it's because each Sentinel-2 image covers multiple aerial images. E.g., flair_aerial_train/D004_2021/Z1_NN contains 100 aerial images, and the corresponding flair_sen_train/D004_2021/Z1_NN contains one .npy file. So the association between Sentinel-2 and the aerial images should probably be done on a per-folder basis instead of a per-file basis. E.g., something like:
# Key each sentinel entry by its "<domain>/<zone>" folder, e.g. "D004_2021/Z1_NN".
sentinel_lookup = {"/".join(s["data"].split("/")[-4:-2]): s for s in sentinels}
files = [
    dict(
        image=image,
        # Pair each aerial image with the sentinel entry covering its folder.
        sentinel=sentinel_lookup["/".join(image.split("/")[-4:-2])],
        mask=mask,
    )
    for image, mask in zip(images, masks)
]
Oh wow! What a severe mistake. Thanks for clarifying. Your approach seems very valid.
That happens sometimes 🙂
I just tried training a model on the aerial data now with the code posted above, and all seems to be working. I haven't looked closely at the Sentinel-2 data, although they looked correct when I sampled a couple of elements in the files list in the debugger.
This bug caused a good 90% of all images to be omitted. Refs: microsoft#2394 (comment)
It got all the correct aerial imagery files now, and the download and extraction are working. However, I'm getting a much more annoying bug now. I've had this issue before with rasterio, and it was due to concurrency issues. This shouldn't really happen here, as the dataloader just iterates through a list of files, and each sample is a separate tif file. I can see there's also been a discussion here (#594), so perhaps @adamjstewart or @calebrob6 knows what's going on. My own guess is that it could be multiple workers having GDAL do simultaneous scans of the same directory when opening individual files in said directory. I.e., there are often many image files in each sub-folder of the dataset, and GDAL will scan the sub-folder for metadata when opening a file inside it, which could be the culprit. For now at least, I've added a workaround below the imports at the top.
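Something along these lines (a sketch; the exact GDAL option used is a guess on my part):

import os

# Guessed mitigation: tell GDAL not to scan sibling files for metadata when
# opening a dataset, so parallel dataloader workers don't trigger
# simultaneous directory scans. Must run before any raster is opened.
os.environ['GDAL_DISABLE_READDIR_ON_OPEN'] = 'EMPTY_DIR'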
I've also minimized the time the data reader is open in rasterio by closing it before doing the tensor operations, in the respective loading methods. This shouldn't really solve the issue here, but I added it anyway as good practice.
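A minimal sketch of that pattern (illustrative, not the exact diff):

import numpy as np
import rasterio
import torch

def load_array(path: str) -> torch.Tensor:
    # Read the raster while the file handle is open...
    with rasterio.open(path) as f:
        array: np.ndarray = f.read()
    # ...and only run the tensor operations after the handle is closed.
    return torch.from_numpy(array).float()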
Not sure if it fixes it, but I'll train a handful of models and try it out. The issue coming from simultaneous directory scans is a bit of a guess.
Never had this multiprocessing issue before, and I don't see anything in the code that looks sus. Can you reproduce this issue with the unit tests? If not, I can also try downloading the dataset and reproducing locally.
Unfortunately not. It pops up at random after training for quite a while. I encountered it the first time after having trained for almost 2 full epochs, i.e., it had already read the entire dataset once, and then got the IReadBlock error during the second epoch. There have been some reports with similar errors here: rasterio/rasterio#2053. That's generally when reading the same file, though, and I agree, nothing looks suspicious in the code. I actually thought my disk was broken, but then I remembered that that was my exact thought last time I got this error with rasterio. I'll try and let it run some more times and see if it keeps happening. Just saw in rasterio/rasterio#2053 (comment) that upgrading to a newer version of GDAL might help. I'll also give that a try.
Otherwise the entire file is changed; this keeps the git history more traceable
Finally got it debugged, and it was a system error. Download, extraction, and training on the aerial data all seem to work as they should. Sorry about the noise! Completely unrelated to this PR, but it was a corruption in one of the zpools in ZFS. So if you start using that at some point, and you get weird errors that look like disk errors, but checking the disk says everything is fine, then try checking the status of your zpools. Spent too much time identifying this 😑
Can you resolve the merge conflicts so we can run CI on this PR?
…l and minor bug fixes
…ing inconsistencies in original dataset Naming inconsistencies: `flair-2_centroids_sp_to_patch.json` vs `flair_2_centroids_sp_to_patch.zip`
I think everything should be on track so far. I am getting a ruff error for unsorted tuples in the …
If you use ruff 0.8.0 it will sort …
"""Get statistics (min, max, means, stdvs) for each used band in order. | ||
|
||
Args: | ||
split (str): Split for which to get statistics (currently only for train) |
For docstrings in torchgeo, we do not repeat the type in the docstring; only describe the function arguments.
I think this will also resolve the failing docs test
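A sketch of the corrected docstring under that convention (the surrounding signature and return type are assumptions):

def get_statistics(self, split: str) -> dict[str, list[float]]:
    """Get statistics (min, max, means, stdvs) for each used band in order.

    Args:
        split: Split for which to get statistics (currently only for train)
    """
    ...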
    return tensor

def _load_sentinel(self, path: Path) -> Tensor:
    # FIXME: should this really be returned as a tuple?
This can be removed, right?
    self.root,
    md5=self.md5s.get(url, None) if self.checksum else None,
)
# FIXME: Why is download_url not checking integrity (tests run through)?
Is this fixed?
This was still an issue last time I checked. As mentioned in the opening text of the PR, somehow, when I run the pytests with wrong md5 hashes, the integrity is not checked unless I explicitly call it here again (it is implicitly called in download_url).
    self.root,
    md5=self.md5s.get(url, None) if self.checksum else None,
)
# FIXME: Why is download_url not checking integrity (tests run through)?
How about this FIXME?
I left this here as somewhat of a reminder, because it has the same behavior as the parent class, i.e., currently all tests pass even with wrong md5s.
""" | ||
super().__init__(FLAIR2, batch_size, num_workers, **kwargs) | ||
|
||
self.patch_size = _to_tuple(patch_size) |
I think that's a good idea. Could either be included here, or in a separate PR.
@@ -0,0 +1,79 @@
"""This module contains the FLAIR2DataModule class for loading the FLAIR2 dataset. |
I think this also needs the Microsoft copyright header.
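For reference, the header used at the top of torchgeo source files:

# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.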
@@ -0,0 +1,8 @@
/home/mathias/Dev/forks/torchgeo/tests/data/flair2/FLAIR2/flair_2_labels_test.zip: b13c4a3cb7ebb5cadddc36474bb386f9 |
Maybe remove the personal directory from this text file.
rgb_indices = [self.all_bands.index(band) for band in self.rgb_bands]
# Check if RGB bands are present in self.bands
if not all([band in self.bands for band in self.rgb_bands]):
The code coverage is indicating that the missing-RGB-bands branch is not being hit, so I think you just need to add a separate plot test similar to the one in torchgeo/tests/datasets/test_eurosat.py (line 110 in 2f3e8fd):

def test_plot_rgb(self, dataset: EuroSAT, tmp_path: Path) -> None:
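A sketch of what such a test might look like for FLAIR2, adapted from the EuroSAT test (the non-RGB band name and fixture setup are assumptions):

import pytest

from torchgeo.datasets import RGBBandsMissingError

def test_plot_rgb(self, dataset: FLAIR2) -> None:
    # Instantiate with a single non-RGB band so plot() must raise.
    ds = FLAIR2(root=dataset.root, bands=('B04',))
    with pytest.raises(
        RGBBandsMissingError, match='Dataset does not contain some of the RGB bands'
    ):
        ds.plot(ds[0], suptitle='Single Band')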
    K.Normalize(mean=self.mean, std=self.std), data_keys=['image', 'mask']
)

self.augs = augs if augs is not None else self.aug
Is self.augs intended to act on the data somewhere? My immediate thought was that it would be related to the augmentations part of the base datamodule (torchgeo/torchgeo/datamodules/geo.py, lines 70 to 79 in fedf993):
# Data augmentation
Transform = Callable[[dict[str, Tensor]], dict[str, Tensor]]
self.aug: Transform = K.AugmentationSequential(
    K.Normalize(mean=self.mean, std=self.std), data_keys=None, keepdim=True
)
self.train_aug: Transform | None = None
self.val_aug: Transform | None = None
self.test_aug: Transform | None = None
self.predict_aug: Transform | None = None
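If so, a minimal sketch (a hypothetical excerpt of the datamodule's __init__, mirroring the snippet above) of assigning to the attributes the base class actually applies, rather than a new self.augs:

# Hypothetical excerpt: the base datamodule applies train_aug/val_aug/test_aug
# (falling back to self.aug) in on_after_batch_transfer, so a subclass can
# assign to those instead of introducing self.augs.
self.train_aug = K.AugmentationSequential(
    K.Normalize(mean=self.mean, std=self.std),
    K.RandomHorizontalFlip(p=0.5),
    data_keys=None,
    keepdim=True,
)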
def _load_image(self, path: Path) -> Tensor:
    """Load a single image.

    Args:
        path: path to the image

    Returns:
        Tensor: the loaded image
    """
    with rasterio.open(path) as f:
        array: np.typing.NDArray[np.int_] = f.read()
    tensor = torch.from_numpy(array).float()
    if 'B05' in self.bands:
        # Height channel will always be the last dimension
        tensor[-1] = torch.div(tensor[-1], 5)

    return tensor
Should the bands perhaps be extracted here based on self.bands? E.g., something like:
def _load_image(self, path: Path) -> Tensor:
    """Load a single image.

    Args:
        path: path to the image

    Returns:
        Tensor: the loaded image
    """
    with rasterio.open(path) as f:
        array: np.typing.NDArray[np.int_] = f.read()
    tensor = torch.from_numpy(array).float()
    if 'B05' in self.bands:
        # Height channel will always be the last dimension
        tensor[-1] = torch.div(tensor[-1], 5)
    # Extract the bands to be used, e.g. self.bands=("B01", "B02", "B03")
    # will extract the RGB bands.
    tensor = tensor[[int(band[-2:]) - 1 for band in self.bands]]
    return tensor
Then when a user has defined n bands in self.bands, the returned sample will only contain those bands, instead of all five. Perhaps self.bands should also be renamed to self.aerial_bands, and a self.sentinel_bands should be added(?)
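A hypothetical usage example of that behavior (constructor arguments are assumptions):

from torchgeo.datasets import FLAIR2  # the class added in this PR

# With band sub-setting in place, requesting only the RGB bands would yield a
# 3-channel image tensor instead of all five aerial bands.
ds = FLAIR2(root='data/FLAIR2', split='train', bands=('B01', 'B02', 'B03'))
sample = ds[0]
assert sample['image'].shape[0] == 3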
FLAIR#2 dataset
The FLAIR #2 <https://github.com/IGNF/FLAIR-2> dataset is an extensive dataset from the French National Institute of Geographical and Forest Information (IGN) that provides a unique and rich resource for large-scale geospatial analysis. The dataset is sampled countrywide and is composed of over 20 billion annotated pixels of very high resolution aerial imagery at 0.2 m spatial resolution, acquired over three years and in different months (spatio-temporal domains).
The FLAIR2 dataset is a dataset for semantic segmentation of aerial images. It contains aerial images, Sentinel-2 images, and masks for 13 classes. The dataset is split into a training and a test set.
Implementation Details
NonGeoDataset, __init__()
After discussions following #2303, we decided that, at least until the faulty mask data are fixed, the FLAIR2 dataset will be of type NonGeoDataset. Unlike common NonGeoDatasets, FLAIR2 exposes a use_toy and a use_sentinel argument. The use_toy flag will use the toy data instead, which is a small subset of the data. The use_sentinel argument, on the other hand, decides whether a sample includes the augmented sentinel data provided by the maintainers of FLAIR2.
_verify, _download, _extract
As each of the splits/sample-types (i.e., [train, test], [aerial, sentinel, labels]) is contained in an individual zip download, download and extraction have to happen multiple times. On the other hand, the toy dataset is contained in a single zip. Furthermore, to map the super-patches of the sentinel data to the actual input image, a flair-2_centroids_sp_to_patch.json is required, which equally has to be downloaded as an individual zip.
_load_image, _load_sentinel, _load_target
For storage reasons, the elevation (5th band) of the image is stored as a uint, i.e., the original height multiplied by 5. We decided to divide the band by 5 to recover the original height (e.g., a stored value of 100 corresponds to a height of 20), to make the trained model more usable for other data. See Questions, please.
As mentioned previously, additional metadata has to be used to get from the sentinel.npy to the actual area. Initially, for debugging reasons, we implemented returning not the cropped image but the original data plus the cropping slices (i.e., indices). Consequently, the images can be plotted in a more meaningful manner; otherwise, the resolution is so low that one can hardly recognize features. This was crucial for debugging to find the correct logic (a classic y, x instead of x, y ordering mistake). We do not know if this is smart for "production code". See Questions, please.
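For illustration, a minimal sketch of that crop-indices idea (file name, shapes, and index layout are assumptions):

import numpy as np

# Crop a sentinel super-patch of shape (T, C, H, W) with the slice indices
# returned alongside the raw data, minding the y-before-x ordering.
sentinel = np.load('SEN2_sp_data.npy')   # assumed shape (T, 10, H, W)
y0, y1, x0, x1 = 12, 22, 40, 50          # example crop indices
patch = sentinel[:, :, y0:y1, x0:x1]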
Moreover, the dimensions of the sentinel data $T \times C \times W \times H$ (with $C = 10$) vary in both $T$ and $W$, $H$. This is problematic for the datamodule. We have not done extensive research, but the varying dimensions seem to bug the module. Disabling the use_sentinel flag will make the module work.
The labels include values from 1 to 19. The data paper clearly mentions grouping the classes $> 13$ into one class, other, due to underrepresentation. We followed this suggestion. Furthermore, rescaling from 0 to 12 was applied. See Questions, please.
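For illustration, a minimal sketch of that label handling (the helper is hypothetical; the class ids follow the description above):

import torch

# Collapse classes above 13 into "other" (13), then shift so labels start at
# 0, giving the 13 classes 0..12 described above.
def remap_labels(mask: torch.Tensor) -> torch.Tensor:
    mask = torch.clamp(mask, max=13)  # 14..19 -> 13 ("other")
    return mask - 1                   # 1..13 -> 0..12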
Questions
How shall we load/provide the sentinel data? As cropped data or some other way? I do not see the current implementation as fit for production.
Shall we rescale the classes to start from 0? Shall we group the classes as suggested in the data paper?
Integrity checking in download_url does not seem to work (in unit tests), why? I had to explicitly add a check_integrity call; otherwise the tests pass, even if the md5s do not match.
The GitHub Actions on the forked repo produce a magic ruff error (https://github.com/MathiasBaumgartinger/torchgeo/actions/runs/11687694109/job/32556175383#step:7:1265). Can you help me resolve this mystery?
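A hypothetical sketch of that explicit verification (the helper name and error handling are my own; download_url and check_integrity as in torchvision):

import os
from torchvision.datasets.utils import check_integrity, download_url

# Verify the archive explicitly after downloading, since download_url alone
# did not fail on a wrong md5 in the unit tests.
def download_and_verify(url: str, root: str, filename: str, md5: str) -> None:
    download_url(url, root, filename=filename, md5=md5)
    if not check_integrity(os.path.join(root, filename), md5):
        raise RuntimeError(f'{filename} failed its md5 integrity check')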
TODOs/FIXMEs